16 research outputs found

    AI for Health and Well Being @SI Lab

    Get PDF
    This presentation was delivered in the framework of a bilateral meeting between CNR and IVI on September 5, 2023.

    Monitoring Ancient Buildings: Real Deployment of an IoT System Enhanced by UAVs and Virtual Reality

    Get PDF
    The historical buildings of a nation are the tangible signs of its history and culture. Their preservation deserves considerable attention, being of primary importance from a historical, cultural, and economic point of view. Having a scalable and reliable monitoring system plays an important role in Structural Health Monitoring (SHM); therefore, this paper proposes an Internet of Things (IoT) architecture for a remote monitoring system that is able to integrate, through the Virtual Reality (VR) paradigm, the environmental and mechanical data acquired by a wireless sensor network installed on three ancient buildings with the images and context information acquired by an Unmanned Aerial Vehicle (UAV). Moreover, the information provided by the UAV allows prompt inspection of critical structural damage, such as the patterns of cracks in the structural components of the building being monitored. Our approach opens new scenarios to support SHM activities, because an operator can interact with real-time data retrieved from a Wireless Sensor Network (WSN) by means of the VR environment.
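    To make the data-integration idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how WSN readings and context metadata from a UAV frame might be merged into one payload that a VR client could consume. All field names and example values are assumptions.

```python
import json
import time

def build_vr_payload(wsn_readings, uav_frame):
    """Combine environmental/mechanical WSN readings with UAV context for a VR scene."""
    return {
        "timestamp": time.time(),
        "building_id": uav_frame["building_id"],
        "sensors": [
            {"node": r["node_id"], "type": r["type"], "value": r["value"], "unit": r["unit"]}
            for r in wsn_readings
        ],
        # Image and pose let the VR client place the UAV photo (e.g. a crack pattern)
        # next to the sensor readings of the monitored facade.
        "uav_image_url": uav_frame["image_url"],
        "uav_pose": uav_frame["pose"],
    }

if __name__ == "__main__":
    wsn = [
        {"node_id": "N01", "type": "strain", "value": 12.4, "unit": "microstrain"},
        {"node_id": "N02", "type": "temperature", "value": 18.7, "unit": "degC"},
    ]
    uav = {
        "building_id": "tower_A",
        "image_url": "http://example.org/frames/latest.jpg",
        "pose": {"x": 4.2, "y": 1.1, "z": 15.0, "yaw_deg": 90.0},
    }
    print(json.dumps(build_vr_payload(wsn, uav), indent=2))
```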

    Mirror mirror on the wall... an unobtrusive intelligent multisensory mirror for well-being status self-assessment and visualization

    Get PDF
    A person’s well-being status is reflected by their face through a combination of facial expressions and physical signs. The SEMEOTICONS project translates the semeiotic code of the human face into measurements and computational descriptors that are automatically extracted from images, videos and 3D scans of the face. SEMEOTICONS developed a multisensory platform in the form of a smart mirror to identify signs related to cardio-metabolic risk. The aim was to enable users to self-monitor their well-being status over time and guide them to improve their lifestyle. Significant scientific and technological challenges have been addressed to build the multisensory mirror, from touchless data acquisition to the real-time processing and integration of multimodal data.
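    As a hedged illustration of what integrating multimodal descriptors could look like downstream (this is not the SEMEOTICONS pipeline), the sketch below combines a few already-normalized facial descriptors into a single weighted index that a user could track over time; the descriptor names and weights are invented for the example.

```python
def wellbeing_index(descriptors, weights):
    """Weighted average of facial descriptors already normalized to [0, 1];
    by the illustrative convention used here, lower means a better status."""
    total = sum(weights.values())
    return sum(descriptors[name] * w for name, w in weights.items()) / total

if __name__ == "__main__":
    # Hypothetical descriptor names and weights -- not SEMEOTICONS outputs.
    weights = {"skin_redness": 0.3, "periorbital_puffiness": 0.3, "heart_rate_deviation": 0.4}
    session = {"skin_redness": 0.20, "periorbital_puffiness": 0.35, "heart_rate_deviation": 0.10}
    print(f"well-being index: {wellbeing_index(session, weights):.2f}")
```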

    State of the art of audio- and video based solutions for AAL

    Get PDF
    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real-world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    State of the Art of Audio- and Video-Based Solutions for AAL

    Get PDF
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real-world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    A portable, intelligent, customizable device for human breath analysis

    No full text
    Breath analysis allows for monitoring the metabolic processes that occur in the human body in a non-invasive way. Compared with other traditional methods such as blood tests, breath analysis is harmless not only to the subjects but also to the personnel who collect the samples. However, despite its great potential, only a few breath tests are commonly used in clinical practice nowadays, and breath analysis has not yet gained wider use. One of the main reasons relates to the standard instrumentation for gas analysis: instruments such as gas chromatographs are expensive and time-consuming, and their use, as well as the interpretation of the results, often requires specialized personnel. E-nose systems, based on gas sensor arrays, are easier to use and able to analyze gases in real time, but, although cheaper than a gas chromatograph, their cost remains high. During my research activity, carried out at the Signals and Images Laboratory (SiLab) of the Institute of Information Science and Technology (ISTI) of the National Research Council (CNR), I designed and developed the so-called Wize Sniffer (WS), a device able to accurately analyze human breath composition and, at the same time, overcome the limitations of existing instrumentation for gas analysis. The idea of the Wize Sniffer was born in the framework of the SEMEiotic Oriented Technology for Individual’s CardiOmetabolic risk self-assessmeNt and Self-monitoring (SEMEOTICONS, www.semeoticons.eu) European Project, and it was designed to detect, in human breath, those molecules related to habits that are noxious for cardio-metabolic risk. The clinical assumption behind the Wize Sniffer lay in the fact that harmful habits such as alcohol consumption, smoking, and an unhealthy diet cause a variation in the concentration of a set of molecules (among which carbon monoxide, ethanol, hydrogen, and hydrogen sulfide) in the exhaled breath. Therefore, the goal was to realize a portable and easy-to-use device, based on cheap electronics, to be used by anybody at home. The main contributions of my work were the following: (i) the design and development of a portable, low-cost, customizable, easy-to-use device suitable for any context of use, achieved by using cheap commercial discrete gas sensors and an Arduino board, writing the software, and calibrating the system; and (ii) the development of a method to analyze breath composition and assess an individual’s cardio-metabolic risk, which I successfully validated on real people. Given such good outcomes, I wanted the Wize Sniffer to take a further step forward, towards diagnosis in particular. The application field was chronic liver impairment, as studies involving e-nose systems in the identification of liver disease are still few. In addition, the diagnosis of liver impairment often requires very invasive clinical tests (biopsy, for instance). In this proof-of-concept study, the Wize Sniffer showed good diagnosis-oriented properties in discriminating the severity of liver disease (absence of disease, chronic liver disease, cirrhosis, hepatic encephalopathy) on the basis of the detected ammonia.

    Development of a new portable device designed for selective olfaction

    No full text
    Digital semeiotics is one of the most recent challenges: it aims at relating a number of computational descriptors to atherosclerotic cardiovascular diseases, which are leading causes of mortality worldwide. These descriptors can involve (i) morphometric, biometric, and colorimetric analysis of the face; (ii) spectroscopic analysis of the skin and iris, of sub-cutaneous substances, and of the function of subcutaneous tissues; and (iii) compositional analysis of breath and exhaled air. In this work, we describe the design and functionality of the first prototype of the Wize Sniffer (WS, "Wize Sniffer 1.0"), a new portable device for breath analysis limited to an effective number of substances. Within the European SEMEOTICONS Project, the WS is intended as a hardware/software tool for the analysis of the volatile organic compounds of breath and as a platform for data mining and data integration. The WS should be able to provide useful information about the “breathprint”, i.e., the analog of a fingerprint for the state of health of an individual.

    Cardio-metabolic Diseases Prevention by Self-monitoring the Breath

    No full text
    As a new and very promising technique, breath analysis allows for monitoring the biochemical processes that occur in the human body in a non-invasive way. Nevertheless, the high cost of standard analytical instrumentation (i.e., gas chromatographs, mass spectrometers), the need for specialized personnel able to read the results, and the lack of protocols to collect breath samples limit the exploitation of breath analysis in clinical practice. Here, we describe the development of a device, named the Wize Sniffer, which is portable and entirely based on low-cost technology: it uses an array of commercial semiconductor gas sensors and a widely employed open-source controller, an Arduino Mega2560 with an Ethernet module. In addition, it is very easy to use, also for non-specialized personnel, and able to analyze the composition of the breath in real time. The Wize Sniffer is composed of three modules: a signal measurement module, a signal conditioning module, and a signal processing module. The idea was born in the framework of the European SEMEiotic Oriented Technology for Individual's CardiOmetabolic risk self-assessmeNt and Self-monitoring (SEMEOTICONS) Project, in order to monitor an individual's lifestyle by detecting in the breath those molecules related to habits that are noxious for cardio-metabolic risk (alcohol intake, smoking, poor diet). Nonetheless, the modular configuration of the device allows for changing the sensors according to the molecules to be detected, thus fully exploiting the potential of breath analysis.
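    The three-module structure lends itself to a simple pipeline. The Python sketch below mirrors that structure under stated assumptions: the measurement step uses hard-coded voltage samples in place of the values the Arduino Mega2560 would stream over Ethernet, and the sensor channels and thresholds are illustrative, not the calibrated values of the Wize Sniffer.

```python
from statistics import mean

def measure():
    """Measurement module: in the real device, raw voltages arrive from the
    Arduino Mega2560 over Ethernet; here they are hard-coded sample values."""
    return {
        "CO":   [0.41, 0.43, 0.42, 0.44, 0.42],   # volts, CO-sensitive sensor (assumed)
        "EtOH": [0.12, 0.11, 0.13, 0.12, 0.12],   # volts, ethanol-sensitive sensor (assumed)
    }

def condition(raw):
    """Conditioning module: smooth each channel with a simple average."""
    return {gas: mean(values) for gas, values in raw.items()}

def process(volts, thresholds=None):
    """Processing module: flag channels whose smoothed voltage exceeds a threshold."""
    thresholds = thresholds or {"CO": 0.40, "EtOH": 0.20}   # illustrative cut-offs
    return {gas: ("elevated" if v > thresholds[gas] else "normal") for gas, v in volts.items()}

if __name__ == "__main__":
    print(process(condition(measure())))
```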

    An E-Nose for the Monitoring of Severe Liver Impairment: A Preliminary Study

    No full text
    Biologically inspired by the mammalian olfactory system, electronic noses have become popular over the last three decades. In the literature, as well as in daily practice, a wide range of applications is reported. Nevertheless, the most pioneering one has been (and still is) the assessment of human breath composition. In this study, we used a prototype electronic nose, called the Wize Sniffer (WS) and based on an array of semiconductor gas sensors, to detect ammonia in the breath of patients suffering from severe liver impairment. In the setting of a severely impaired liver, toxic substances, such as ammonia, accumulate in the systemic circulation and in the brain. This may result in Hepatic Encephalopathy (HE), a spectrum of neuro-psychiatric abnormalities that include changes in cognitive functions, consciousness, and behaviour. HE can be detected only by specific but time-consuming and burdensome examinations, such as blood ammonia level assessment and neuro-psychological tests. In the presented proof-of-concept study, we aimed at investigating the possibility of discriminating the degree of severity of liver impairment on the basis of the detected breath ammonia, in view of detecting HE at an early stage.
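    For illustration only, the sketch below shows how an estimated breath-ammonia concentration could be mapped to the four severity classes named in the abstract; the numeric cut-offs are placeholders and are not values reported by the study.

```python
def severity_from_ammonia(ppm):
    """Map an estimated breath-ammonia concentration (ppm) to a severity class.
    The cut-off values are placeholders, not results from the study."""
    if ppm < 0.5:
        return "absence of disease"
    if ppm < 1.5:
        return "chronic liver disease"
    if ppm < 3.0:
        return "cirrhosis"
    return "hepatic encephalopathy"

if __name__ == "__main__":
    for value in (0.3, 1.0, 2.2, 4.1):
        print(f"{value:.1f} ppm -> {severity_from_ammonia(value)}")
```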